21 research outputs found

    EEG Movement Artifact Suppression in Interactive Virtual Reality

    Decoding Lip Movements During Continuous Speech using Electrocorticography

    A note on brain actuated spelling with the Berlin brain-computer interface

    Brain-Computer Interfaces (BCIs) are systems capable of decoding neural activity in real time, thereby allowing a computer application to be controlled directly by the brain. Since the characteristics of such direct brain-to-computer interaction are limited in several respects, one major challenge in BCI research is intelligent front-end design. Here we present the mental text entry application ‘Hex-o-Spell’, which incorporates principles of Human-Computer Interaction research into BCI feedback design. The system utilises the high bandwidth of the visual display to compensate for the extremely limited control bandwidth: control operates with only two mental states, and the timing of the state changes encodes most of the information. The display is visually appealing, and control is robust. The effectiveness and robustness of the interface were demonstrated at CeBIT 2006 (the world’s largest IT fair), where two subjects operated the mental text entry system at speeds of up to 7.6 char/min.
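
    The two-state selection principle the abstract describes lends itself to a compact illustration. Below is a minimal Python sketch of a Hex-o-Spell-style two-stage selection driven by only two control outputs; the letter-to-hexagon layout, the function names, and the oracle classifier are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a Hex-o-Spell-style
# two-state selection scheme: one mental state ("rotate") advances the
# highlight around six hexagons, the other ("select") confirms the
# current option. Selection happens in two stages: first a group of
# letters, then a letter within that group.

HEX_GROUPS = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY", "Z_.,<"]  # assumed layout

def spell(target, classify):
    """Spell `target` using a two-state control signal.

    `classify` stands in for the BCI classifier: it receives the currently
    highlighted option and the intended character and returns "rotate" or
    "select".
    """
    typed = []
    for char in target:
        # Stage 1: pick the hexagon (letter group) containing `char`.
        group_idx = 0
        while classify(HEX_GROUPS[group_idx], char) == "rotate":
            group_idx = (group_idx + 1) % len(HEX_GROUPS)
        # Stage 2: the chosen group's letters are spread over the hexagons;
        # pick the individual letter the same way.
        letters = HEX_GROUPS[group_idx]
        letter_idx = 0
        while classify(letters[letter_idx], char) == "rotate":
            letter_idx = (letter_idx + 1) % len(letters)
        typed.append(letters[letter_idx])
    return "".join(typed)

# Ideal "oracle" classifier for demonstration: selects whenever the
# highlighted option contains (stage 1) or equals (stage 2) the target.
def oracle(option, target_char):
    return "select" if target_char in option else "rotate"

print(spell("HEX", oracle))  # -> HEX
```

    In the real system, each call to the classifier corresponds to a window of EEG during which the timing of the mental-state change carries the information, which is why robust timing matters more than raw classification bandwidth.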

    P300 Speller Efficiency with Common Average Reference

    Foreign Literature

    A Brain-Computer Interface (BCI) is a device that allows the user to communicate with the world without using voluntary muscle activity, i.e., using only the electrical activity of the brain. It exploits the well-studied observation that the brain reacts differently to different stimuli, as a function of the attention allotted to the stimulus stream and the specific processing triggered by the stimulus. In this article we present a single-trial independent component analysis (ICA) method that works with the BCI system proposed by Farwell and Donchin. It can dramatically reduce the signal processing time and improve the data communication rate. This ICA method achieved 76.67% accuracy on single-trial P300 response identification.
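
    As a rough illustration of the idea, the following Python sketch unmixes a single epoch with FastICA and scores the component that best matches a canonical P300 template. The array shapes, the Gaussian template, and the template-matching rule are assumptions for demonstration, not the paper's exact pipeline.

```python
# A minimal sketch, not the paper's method: FastICA unmixes one EEG epoch
# into independent components; the component most correlated with a
# P300-like template is taken as the P300 source, and that correlation is
# the single-trial score.
import numpy as np
from sklearn.decomposition import FastICA

def p300_score(epoch, template):
    """Score one epoch (channels x samples) for a P300 response."""
    ica = FastICA(n_components=epoch.shape[0], random_state=0)
    sources = ica.fit_transform(epoch.T).T  # components x samples
    corrs = [abs(np.corrcoef(s, template)[0, 1]) for s in sources]
    return max(corrs)

# Toy usage: 8 channels, 1-second epochs at 256 Hz, a Gaussian bump
# around 300 ms standing in for the canonical P300 template.
fs, n_ch = 256, 8
t = np.arange(fs) / fs
template = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
rng = np.random.default_rng(0)
epoch = rng.standard_normal((n_ch, fs)) * 0.5 + template  # simulated target trial
print(f"score = {p300_score(epoch, template):.2f}")
```

    Because the decision is made on a single trial rather than on an average over many repetitions, a pipeline like this is what enables the reduction in processing time and the higher communication rate reported in the abstract.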

    The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings

    Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed from spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels for the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech decoding counterparts.
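
    A minimal Python sketch of a speech activity detector in the spirit of this study appears below: per-channel high-gamma band power as the spectral feature and a linear classifier labeling each window as speech versus rest. The sampling rate, band limits, window size, and classifier choice are assumptions, not the paper's model.

```python
# A minimal sketch, under assumed parameters, of a windowed speech
# activity detector: log high-gamma power per channel -> linear classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 1000  # assumed sampling rate (Hz)

def high_gamma_power(windows):
    """Log band power in 70-170 Hz per channel; windows: (n, channels, samples)."""
    b, a = butter(4, [70, 170], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, windows, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))  # (n, channels)

# Toy data: 200 half-second windows of 32-channel activity; "speech"
# windows get extra 120 Hz energy on a subset of channels.
rng = np.random.default_rng(1)
X_raw = rng.standard_normal((200, 32, FS // 2))
y = rng.integers(0, 2, 200)
carrier = np.sin(2 * np.pi * 120 * np.arange(FS // 2) / FS)
X_raw[y == 1, :8, :] += 2.0 * carrier

features = high_gamma_power(X_raw)
clf = LogisticRegression(max_iter=1000).fit(features, y)
print(f"training accuracy = {clf.score(features, y):.2f}")
# Channel relevance: the study's nested-hierarchy analysis compares which
# channels matter across the overt, mouthed, and imagined speech modes.
print("top channels:", np.argsort(-np.abs(clf.coef_[0]))[:5])
```

    Training one such detector per speech mode and comparing the informative channels is one plausible way the nested-subset structure across overt, mouthed, and imagined speech could be examined.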